Convert EKFAC tests to pytest #68
Merged
Conversation
The keyword argument `dtype` for `AutoModelForCausalLM.from_pretrained` does not exist in transformers 4.54.1, the version pinned in uv.lock; in that version the parameter is still called `torch_dtype`, which newer releases keep only as a deprecated alias.
Also rework default handling to avoid specifying default values in multiple places.
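A minimal sketch of the default-handling rework, with a hypothetical `HessianConfig` dataclass (the real config and field names in bergson may differ): state each default once and derive call-site kwargs from the config instead of restating the values.

```python
from dataclasses import dataclass


# Hypothetical config sketch; real field names in bergson may differ.
@dataclass
class HessianConfig:
    dtype: str = "bfloat16"          # the single place this default is stated
    use_dataset_labels: bool = True


def make_model_kwargs(cfg=None):
    """Build from_pretrained kwargs from the config instead of
    restating default values at every call site."""
    cfg = cfg or HessianConfig()     # defaults come from the dataclass
    return {"torch_dtype": cfg.dtype}  # the 4.54.1 spelling of the kwarg
```

Adding or changing a default then touches one line of the dataclass rather than every caller.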
…ection) This way, when we start using pytest, test failures will be reported properly. test_eigenvalue_correction had no explicit success criterion, so I made one up.
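One plausible shape for such a made-up criterion (the 5% threshold here is illustrative, not necessarily the value the PR chose): bound the maximum relative difference between the computed and reference eigenvalue corrections.

```python
import numpy as np


def max_rel_diff(computed, reference):
    """Largest elementwise relative difference, guarding against zero denominators."""
    computed, reference = np.asarray(computed), np.asarray(reference)
    denom = np.maximum(np.abs(reference), 1e-12)
    return float(np.max(np.abs(computed - reference) / denom))


def check_eigenvalue_correction(computed, reference, tol=0.05):
    # Fail the test if any entry deviates by more than `tol` (5% here).
    assert max_rel_diff(computed, reference) <= tol, (
        f"max_rel_diff={max_rel_diff(computed, reference):.3%} exceeds {tol:.0%}"
    )
```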
LouisYRYJ reviewed on Nov 14, 2025
This includes using fixtures for ground truth generation and test configuration, so that we can just do `uv run pytest -sv tests/ekfac_tests` and ground truth will be auto-generated.
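A sketch of the auto-generation idea with a hypothetical helper (in the PR this logic lives in pytest fixtures): compute and cache the ground truth on first use, then reuse the cached file.

```python
import pickle
from pathlib import Path


def ensure_ground_truth(path, compute):
    """Load cached ground truth, computing and saving it on first use.

    Hypothetical helper name. In a conftest.py this would be wrapped in a
    session-scoped fixture, so that running
    `uv run pytest -sv tests/ekfac_tests` auto-generates missing files.
    """
    path = Path(path)
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(pickle.dumps(compute()))  # generate once
    return pickle.loads(path.read_bytes())         # reuse thereafter
```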
Ran "uv pre-commit run --all-files", which reads from .pre-commit-config.yaml. Unfortunately, pre-commit does not respect tool settings in pyproject.toml, so right now there is conflicting information between pyproject.toml and .pre-commit-config.yaml, and different settings and tool versions are used depending on how we run the tools.
test_eigenvalue_corrections had to be disabled due to precision errors:

h.6.attn.attention.out_proj: max_rel_diff=2.285%
h.6.mlp.c_proj: max_rel_diff=3.599%
h.7.attn.attention.out_proj: max_rel_diff=4.041%
h.7.mlp.c_proj: max_rel_diff=2.204%
It seems the working-directory parameter in the CI config is ignored when pyright is configured in pyproject.toml, so tweak the pyproject.toml configuration instead.
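For context, a pyright section in pyproject.toml looks like the following (the paths are illustrative, not the repository's actual list); since pyright resolves paths in its config relative to the config file's location, such a section can make a CI `working-directory` setting moot.

```toml
[tool.pyright]
# Illustrative paths; the repository's actual include list may differ.
include = ["bergson", "tests"]
```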
Overwriting is allowed using the --overwrite flag.
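A sketch of that behavior with a hypothetical CLI entry point (the argument names and placeholder generation step are illustrative):

```python
import argparse
from pathlib import Path


def main(argv=None):
    parser = argparse.ArgumentParser(description="Generate EKFAC ground truth")
    parser.add_argument("output", type=Path)
    parser.add_argument("--overwrite", action="store_true",
                        help="allow replacing an existing ground truth file")
    args = parser.parse_args(argv)
    if args.output.exists() and not args.overwrite:
        raise SystemExit(f"{args.output} exists; pass --overwrite to replace it")
    args.output.write_text("...")  # placeholder for the real generation step
```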
Use `loss.sum().backward()` to avoid scaling the gradients by 1/B (and the gradient covariance matrix by 1/B^2), where B is the batch size. Without this change, G2/G1 is empirically ~0.2 with the default set of parameters.
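A small numerical check of the scaling argument (toy numbers, not the PR's actual test): with a mean-reduced loss, each per-sample gradient a backward hook sees is divided by B, so the uncentered gradient covariance shrinks by a factor of B².

```python
import numpy as np

rng = np.random.default_rng(0)
B, O = 8, 4
# Stand-ins for the per-sample output gradients a backward hook would see.
per_sample_grads = rng.normal(size=(B, O))


def grad_cov(grads):
    # Uncentered covariance, as accumulated by a gradient-covariance hook.
    return grads.T @ grads / len(grads)


G_sum = grad_cov(per_sample_grads)        # what loss.sum().backward() yields
G_mean = grad_cov(per_sample_grads / B)   # loss.mean().backward() divides by B

# The mean reduction scales the covariance by exactly 1/B**2.
assert np.allclose(G_mean * B**2, G_sum)
```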
This compares the KFAC approximation against the exact FIM computed on a toy model. We intentionally restrict test conditions to avoid exercising issues with padding and last token gradient which are fixed in the next commit.
When batching sequences of different lengths, we pad shorter sequences. These padding positions aren't real data and shouldn't contribute to the FIM. Similarly, the last position of each sequence has no next token to predict. Invalid positions affected the two covariances differently:

- The activation covariance A was contaminated with out-of-distribution activations at padding positions.
- The gradient covariance G was underestimated: gradients are zero at invalid positions, but total_processed included them in the denominator.
- When sample=True, there was a third issue: sampled labels didn't preserve -100 at padding positions, so G was corrupted with non-zero gradients.

The fix computes valid_masks in pad_and_tensor() and uses it to filter activations and restrict the loss computation to valid positions.
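A sketch of the masking logic, assuming labels use -100 at padding positions (the helper name and exact signature are illustrative, not the PR's pad_and_tensor() code): a position is valid only if it is real data and has a real next token to predict.

```python
import numpy as np


def valid_mask(labels, pad_id=-100):
    """True where a position both is real data and has a next token.

    labels: (B, T) array with pad_id at padding positions.
    """
    labels = np.asarray(labels)
    real = labels != pad_id                # not a padding position
    # Position t is valid only if position t+1 is real, since the last
    # real token of each sequence has no next token to predict.
    has_next = np.zeros_like(real)
    has_next[:, :-1] = real[:, 1:]
    return real & has_next


labels = np.array([[5, 7, 9, -100],   # length-3 sequence, one pad position
                   [3, 4, 6, 8]])     # full-length sequence
mask = valid_mask(labels)
```

With this mask, activations at invalid positions can be dropped before updating A, and the loss denominator counts only `mask.sum()` positions instead of every slot in the batch.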
CovarianceCollector was called without the target_modules parameter, causing it to hook into all MLP layers instead of just the specified target modules. LambdaCollector and the ground truth collectors already had this parameter set correctly.
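The fix amounts to only hooking the requested modules; a minimal sketch with hypothetical names (the real collectors take more arguments):

```python
def select_targets(named_modules, target_modules=None):
    """Return the (name, module) pairs a collector should hook.

    named_modules: iterable of (name, module), e.g. model.named_modules().
    target_modules: module names to hook. Passing None hooks everything,
    which is the bug described above when a caller omits the parameter.
    """
    if target_modules is None:
        return list(named_modules)
    wanted = set(target_modules)
    return [(n, m) for n, m in named_modules if n in wanted]
```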
smarter added a commit to smarter/bergson that referenced this pull request on Jan 16, 2026
This is still missing FSDP support and test_apply_ekfac.py from EleutherAI#68.

Co-Authored-By: LouisYRYJ <[email protected]>
LouisYRYJ added a commit that referenced this pull request on Jan 16, 2026
* Fix mask bug and add batch size invariance test with toy model

The backward_hook was using g.reshape(-1, O), which includes padding positions in the covariance computation. This causes incorrect results when batches have different sequence lengths. Before this commit, the added test failed with:

> FAILED tests/ekfac_tests/test_batch_size_invariance.py::test_trace_batch_invariant[seq_lengths1-20] - AssertionError: Scalars are not close!
>
> Expected 1.231401894309304 but got 0.8983965093439276.
> Absolute difference: 0.33300538496537635 (up to 1e-4 allowed)
> Relative difference: 0.27042786478102654 (up to 0.01 allowed)

* Fix use_dataset_labels condition and add FIM accuracy test

The condition `if not hessian_cfg.use_dataset_labels:` was inverted, causing the empirical Fisher (with dataset labels) to use sampled labels and vice versa. Add test_fim_accuracy.py, which verifies that KFAC approximates the Fisher Information Matrix within tolerance for both the empirical FIM (dataset labels) and the true FIM (sampled labels).

* Add ground truth ekfac tests

This is still missing FSDP support and test_apply_ekfac.py from #68

Co-Authored-By: LouisYRYJ <[email protected]>
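A toy numerical version of the reshape bug (a numpy stand-in for the hook, with made-up shapes): flattening with `g.reshape(-1, O)` lets zero-gradient padding rows dilute the covariance denominator, while masking before accumulating makes the result invariant to how much padding a batch carries.

```python
import numpy as np

rng = np.random.default_rng(0)
T, O = 5, 3
g_real = rng.normal(size=(T, O))                   # gradients at real positions
g_padded = np.vstack([g_real, np.zeros((2, O))])   # 2 padding rows (zero grads)
mask = np.array([True] * T + [False] * 2)


def cov_buggy(g):
    flat = g.reshape(-1, O)            # includes padding positions
    return flat.T @ flat / len(flat)   # denominator counts padding too


def cov_fixed(g, mask):
    flat = g.reshape(-1, O)[mask]      # keep only valid positions
    return flat.T @ flat / len(flat)


# The buggy version shrinks when padding is added; the fixed one does not.
assert not np.allclose(cov_buggy(g_padded), cov_buggy(g_real))
assert np.allclose(cov_fixed(g_padded, mask), cov_fixed(g_real, np.ones(T, bool)))
```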
Additionally, this required adding a pass/fail threshold to tests/ekfac_tests/test_eigenvalue_correction.py.
I haven't tested test_apply_ekfac yet.